Bayesian multivariate linear regression (from the English Wikipedia)
Bayesian multivariate linear regression

In statistics, Bayesian multivariate linear regression is a
Bayesian approach to multivariate linear regression, i.e. linear regression where the predicted outcome is a vector of correlated random variables rather than a single scalar random variable. A more general treatment of this approach can be found in the article MMSE estimator.
==Details==

Consider a regression problem where the dependent variable to be
predicted is not a single real-valued scalar but an ''m''-length vector
of correlated real numbers. As in the standard regression setup, there
are ''n'' observations, where each observation ''i'' consists of ''k''−1
explanatory variables, grouped into a vector \mathbf{x}_i
of length ''k'' (where a dummy variable with a value of 1 has been
added to allow for an intercept coefficient). This can be viewed as a
set of ''m'' related regression problems for each observation ''i'':
:y_{i,1} = \mathbf{x}_i^{\mathsf T}\boldsymbol\beta_{1} + \epsilon_{i,1}
:\cdots
:y_{i,m} = \mathbf{x}_i^{\mathsf T}\boldsymbol\beta_{m} + \epsilon_{i,m}
where the set of errors \{\epsilon_{i,1},\ldots,\epsilon_{i,m}\}
are all correlated. Equivalently, it can be viewed as a single regression
problem where the outcome is a row vector \mathbf{y}_i^{\mathsf T}
and the regression coefficient vectors are stacked next to each other, as follows:
:\mathbf{y}_i^{\mathsf T} = \mathbf{x}_i^{\mathsf T}\mathbf{B} + \boldsymbol\epsilon_i^{\mathsf T}.
The coefficient matrix \mathbf{B} is a k \times m matrix where the coefficient vectors \boldsymbol\beta_1,\ldots,\boldsymbol\beta_m for each regression problem are stacked horizontally:
:\mathbf{B} =
\begin{bmatrix}
\boldsymbol\beta_1 & \cdots & \boldsymbol\beta_m
\end{bmatrix}
=
\begin{bmatrix}
\beta_{1,1} & \cdots & \beta_{1,m} \\
\vdots & \ddots & \vdots \\
\beta_{k,1} & \cdots & \beta_{k,m}
\end{bmatrix}.

The noise vector \boldsymbol\epsilon_i for each observation ''i''
is jointly normal, so that the outcomes for a given observation are
correlated:
:\boldsymbol\epsilon_i \sim N(0, \boldsymbol\Sigma_{\epsilon}).
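As an illustration, noise rows of this kind can be drawn with NumPy's multivariate-normal sampler; the dimension and covariance values below are made-up choices, not part of the model:

```python
import numpy as np

rng = np.random.default_rng(0)

m = 3                                   # length of each outcome vector
Sigma_eps = np.array([[1.0, 0.5, 0.2],  # an illustrative m x m noise covariance
                      [0.5, 2.0, 0.3],
                      [0.2, 0.3, 1.5]])

# Each row eps_i ~ N(0, Sigma_eps): rows are independent of each other,
# but the m components within a row are correlated.
n = 100_000
E = rng.multivariate_normal(np.zeros(m), Sigma_eps, size=n)

print(E.shape)                      # (100000, 3)
print(np.cov(E, rowvar=False))      # approaches Sigma_eps as n grows
```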
We can write the entire regression problem in matrix form as:
:\mathbf{Y} = \mathbf{X}\mathbf{B} + \mathbf{E},
where \mathbf{Y} and \mathbf{E} are n \times m matrices. The design matrix \mathbf{X} is an n \times k matrix with the observations stacked vertically, as in the standard linear regression setup:
:
\mathbf{X} = \begin{bmatrix} \mathbf{x}_1^{\mathsf T} \\ \mathbf{x}_2^{\mathsf T} \\ \vdots \\ \mathbf{x}_n^{\mathsf T} \end{bmatrix}
= \begin{bmatrix} x_{1,1} & \cdots & x_{1,k} \\
x_{2,1} & \cdots & x_{2,k} \\
\vdots & \ddots & \vdots \\
x_{n,1} & \cdots & x_{n,k}
\end{bmatrix}.
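Under these definitions the whole model is easy to simulate; the sizes and parameter values below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 50, 3, 2          # observations, predictors (incl. intercept), outcomes

# Design matrix X (n x k): a first column of ones gives the intercept.
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

B = rng.normal(size=(k, m))         # true coefficient matrix (k x m)
Sigma_eps = np.array([[1.0, 0.6],   # noise covariance across the m outcomes
                      [0.6, 1.0]])
E = rng.multivariate_normal(np.zeros(m), Sigma_eps, size=n)  # n x m errors

Y = X @ B + E                       # the matrix form Y = X B + E
print(Y.shape)                      # (50, 2)
```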

The classical, frequentist linear least squares solution is to simply estimate the matrix of regression coefficients as
:\hat{\mathbf{B}} = (\mathbf{X}^{\mathsf T}\mathbf{X})^{-1}\mathbf{X}^{\mathsf T}\mathbf{Y}.
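The classical estimate is the ordinary least-squares formula applied column by column; a minimal sketch on simulated data (all sizes and noise levels here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 200, 3, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
B_true = rng.normal(size=(k, m))
Y = X @ B_true + 0.1 * rng.normal(size=(n, m))   # small noise for a clear recovery

# B_hat = (X^T X)^{-1} X^T Y, computed with a linear solver rather than
# an explicit matrix inverse, for numerical stability.
B_hat = np.linalg.solve(X.T @ X, X.T @ Y)

print(np.max(np.abs(B_hat - B_true)))   # small, since the noise is small
```

`np.linalg.lstsq(X, Y)` gives the same estimate and handles rank-deficient designs.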
To obtain the Bayesian solution, we need to specify the conditional likelihood and then find the appropriate conjugate prior. As with the univariate case of linear Bayesian regression, we will find that we can specify a natural conditional conjugate prior (which is scale dependent).
Let us write our conditional likelihood as〔Peter E. Rossi, Greg M. Allenby, Rob McCulloch. ''Bayesian Statistics and Marketing''. John Wiley & Sons, 2012, p. 32.〕
:\rho(\mathbf{E}|\boldsymbol\Sigma_{\epsilon}) \propto |\boldsymbol\Sigma_{\epsilon}|^{-n/2} \exp\left(-\tfrac{1}{2} \operatorname{tr}\left(\mathbf{E}^{\mathsf T} \mathbf{E} \, \boldsymbol\Sigma_{\epsilon}^{-1}\right)\right),
writing the error \mathbf{E} in terms of \mathbf{Y}, \mathbf{X}, and \mathbf{B} yields
:\rho(\mathbf{Y}|\mathbf{X},\mathbf{B},\boldsymbol\Sigma_{\epsilon}) \propto |\boldsymbol\Sigma_{\epsilon}|^{-n/2} \exp\left(-\tfrac{1}{2} \operatorname{tr}\left((\mathbf{Y}-\mathbf{X}\mathbf{B})^{\mathsf T} (\mathbf{Y}-\mathbf{X}\mathbf{B}) \, \boldsymbol\Sigma_{\epsilon}^{-1}\right)\right).
We seek a natural conjugate prior: a joint density \rho(\mathbf{B},\boldsymbol\Sigma_{\epsilon}) which is of the same functional form as the likelihood. Since the likelihood is quadratic in \mathbf{B}, we re-write the likelihood so it is normal in (\mathbf{B}-\hat{\mathbf{B}}), the deviation from the classical sample estimate. Applying the matrix form of the sum-of-squares decomposition to the exponent gives
:\rho(\mathbf{Y}|\mathbf{X},\mathbf{B},\boldsymbol\Sigma_{\epsilon}) \propto |\boldsymbol\Sigma_{\epsilon}|^{-(n-k)/2} \exp\left(-\tfrac{1}{2} \operatorname{tr}\left(\mathbf{S}^{\mathsf T}\mathbf{S} \, \boldsymbol\Sigma_{\epsilon}^{-1}\right)\right) \, |\boldsymbol\Sigma_{\epsilon}|^{-k/2} \exp\left(-\tfrac{1}{2} \operatorname{tr}\left((\mathbf{B}-\hat{\mathbf{B}})^{\mathsf T} \mathbf{X}^{\mathsf T}\mathbf{X} (\mathbf{B}-\hat{\mathbf{B}}) \, \boldsymbol\Sigma_{\epsilon}^{-1}\right)\right),
where
:\mathbf{S} = \mathbf{Y} - \mathbf{X}\hat{\mathbf{B}}.
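This sum-of-squares decomposition can be checked numerically: the cross term vanishes because \mathbf{X}^{\mathsf T}(\mathbf{Y}-\mathbf{X}\hat{\mathbf{B}}) = 0 by the normal equations, so the two traces sum exactly. A sketch with arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, m = 40, 3, 2
X = rng.normal(size=(n, k))
Y = rng.normal(size=(n, m))
B = rng.normal(size=(k, m))                    # an arbitrary candidate B
Sigma_inv = np.linalg.inv(np.array([[1.0, 0.3],
                                    [0.3, 1.0]]))

B_hat = np.linalg.solve(X.T @ X, X.T @ Y)      # least-squares estimate
S = Y - X @ B_hat                              # residual matrix

# tr(E^T E Sigma^-1) with E = Y - XB ...
lhs = np.trace((Y - X @ B).T @ (Y - X @ B) @ Sigma_inv)
# ... equals tr(S^T S Sigma^-1) + tr((B-B_hat)^T X^T X (B-B_hat) Sigma^-1).
rhs = (np.trace(S.T @ S @ Sigma_inv)
       + np.trace((B - B_hat).T @ X.T @ X @ (B - B_hat) @ Sigma_inv))
print(np.isclose(lhs, rhs))                    # True
```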
We would like to develop a conditional form for the priors:
:\rho(\mathbf{B},\boldsymbol\Sigma_{\epsilon}) = \rho(\boldsymbol\Sigma_{\epsilon})\rho(\mathbf{B}|\boldsymbol\Sigma_{\epsilon}),
where \rho(\boldsymbol\Sigma_{\epsilon}) is an inverse-Wishart distribution
and \rho(\mathbf{B}|\boldsymbol\Sigma_{\epsilon}) is some form of normal distribution in the matrix \mathbf{B}. This is accomplished using the vectorization transformation, which converts the likelihood from a function of the matrices \mathbf{B}, \hat{\mathbf{B}} to a function of the vectors \boldsymbol\beta = \operatorname{vec}(\mathbf{B}), \hat{\boldsymbol\beta} = \operatorname{vec}(\hat{\mathbf{B}}). Write
:\operatorname{tr}\left((\mathbf{B}-\hat{\mathbf{B}})^{\mathsf T} \mathbf{X}^{\mathsf T}\mathbf{X} (\mathbf{B}-\hat{\mathbf{B}}) \, \boldsymbol\Sigma_{\epsilon}^{-1}\right) = \operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}})^{\mathsf T} \operatorname{vec}\left(\mathbf{X}^{\mathsf T}\mathbf{X} (\mathbf{B}-\hat{\mathbf{B}}) \, \boldsymbol\Sigma_{\epsilon}^{-1}\right).
Let
:\operatorname{vec}\left(\mathbf{X}^{\mathsf T}\mathbf{X} (\mathbf{B}-\hat{\mathbf{B}}) \, \boldsymbol\Sigma_{\epsilon}^{-1}\right) = \left(\boldsymbol\Sigma_{\epsilon}^{-1} \otimes \mathbf{X}^{\mathsf T}\mathbf{X}\right) \operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}}),
where \mathbf{A} \otimes \mathbf{B} denotes the Kronecker product of matrices \mathbf{A} and \mathbf{B}, a generalization of the outer product which multiplies an m \times n matrix by a p \times q matrix to generate an mp \times nq matrix, consisting of every combination of products of elements from the two matrices.
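This vectorization step is an instance of the standard identity \operatorname{vec}(\mathbf{A}\mathbf{M}\mathbf{C}) = (\mathbf{C}^{\mathsf T} \otimes \mathbf{A})\operatorname{vec}(\mathbf{M}), with vec stacking columns; the transpose on \boldsymbol\Sigma_{\epsilon}^{-1} drops out because it is symmetric. A numerical check with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(4)
k, m = 4, 3
XtX = rng.normal(size=(k, k))
XtX = XtX @ XtX.T                        # a symmetric stand-in for X^T X
A = rng.normal(size=(m, m))
Sig_inv = np.linalg.inv(A @ A.T + np.eye(m))   # symmetric positive definite
M = rng.normal(size=(k, m))              # plays the role of B - B_hat

def vec(A):
    return A.flatten(order='F')          # column-major stacking = vec()

lhs = vec(XtX @ M @ Sig_inv)
rhs = np.kron(Sig_inv, XtX) @ vec(M)     # Sig_inv^T = Sig_inv by symmetry
print(np.allclose(lhs, rhs))             # True
```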
Then
:\operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}})^{\mathsf T} \left(\boldsymbol\Sigma_{\epsilon}^{-1} \otimes \mathbf{X}^{\mathsf T}\mathbf{X}\right) \operatorname{vec}(\mathbf{B}-\hat{\mathbf{B}}) = (\boldsymbol\beta-\hat{\boldsymbol\beta})^{\mathsf T} \left(\boldsymbol\Sigma_{\epsilon}^{-1} \otimes \mathbf{X}^{\mathsf T}\mathbf{X}\right) (\boldsymbol\beta-\hat{\boldsymbol\beta}),
which will lead to a likelihood which is normal in (\boldsymbol\beta - \hat{\boldsymbol\beta}).
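Putting the trace and vectorization identities together, the exponent collapses to a quadratic form in \boldsymbol\beta = \operatorname{vec}(\mathbf{B}); a quick numerical confirmation on arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
k, m = 3, 2
XtX = rng.normal(size=(k, k))
XtX = XtX @ XtX.T                        # symmetric stand-in for X^T X
A = rng.normal(size=(m, m))
Sig_inv = np.linalg.inv(A @ A.T + np.eye(m))   # symmetric positive definite
D = rng.normal(size=(k, m))              # D stands for B - B_hat

beta = D.flatten(order='F')              # vec(B - B_hat), column-major

# tr((B-B_hat)^T X^T X (B-B_hat) Sigma^-1) as a quadratic form in beta:
trace_form = np.trace(D.T @ XtX @ D @ Sig_inv)
quad_form = beta @ np.kron(Sig_inv, XtX) @ beta
print(np.isclose(trace_form, quad_form)) # True
```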
With the likelihood in a more tractable form, we can now find a natural (conditional) conjugate prior.
